
    Reducing the Arity in Unbiased Black-Box Complexity

    We show that for all $1 < k \leq \log n$ the $k$-ary unbiased black-box complexity of the $n$-dimensional OneMax function class is $O(n/k)$. This indicates that the power of higher arity operators is much stronger than what the previous $O(n/\log k)$ bound by Doerr et al. (Faster black-box algorithms through higher arity operators, Proc. of FOGA 2011, pp. 163–172, ACM, 2011) suggests. The key to this result is an encoding strategy, which might be of independent interest. We show that, using $k$-ary unbiased variation operators only, we may simulate an unrestricted memory of size $O(2^k)$ bits. Comment: An extended abstract of this paper has been accepted for inclusion in the proceedings of the Genetic and Evolutionary Computation Conference (GECCO 2012).
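
    To make the unbiased setting concrete, the following minimal Python sketch (illustrative only, and not the paper's encoding strategy) shows the OneMax fitness function together with uniform crossover, a standard example of a binary (2-ary) unbiased variation operator:

        import random

        def onemax(x):
            """OneMax fitness: the number of one-bits in the bit string."""
            return sum(x)

        def uniform_crossover(x, y):
            """A 2-ary unbiased variation operator: copy each bit on which the
            two parents agree and flip a fair coin wherever they disagree.
            Unbiasedness means the operator treats the bit values 0/1 and all
            positions symmetrically."""
            return [xi if xi == yi else random.randrange(2) for xi, yi in zip(x, y)]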

    Playing Mastermind With Constant-Size Memory

    We analyze the classic board game of Mastermind with $n$ holes and a constant number of colors. A result of Chvátal (Combinatorica 3 (1983), 325–329) states that the codebreaker can find the secret code with $\Theta(n / \log n)$ questions. We show that this bound remains valid if the codebreaker may only store a constant number of guesses and answers. In addition to an intrinsic interest in this question, our result also disproves a conjecture of Droste, Jansen, and Wegener (Theory of Computing Systems 39 (2006), 525–544) on the memory-restricted black-box complexity of the OneMax function class. Comment: 23 pages.
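
    For intuition on the $\Theta(n/\log n)$ bound: with a constant number of colors the secret code carries $\Theta(n)$ bits of information, while each answer is a number between $0$ and $n$ and hence reveals only $O(\log n)$ bits, so $\Omega(n/\log n)$ questions are necessary. A minimal sketch of the answer function in the black-peg variant (names are illustrative):

        def black_pegs(secret, guess):
            """Mastermind answer: the number of positions where the guess matches
            the secret. The answer lies in {0, ..., n}, so a single question
            reveals at most log2(n+1) bits of information."""
            return sum(s == g for s, g in zip(secret, guess))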

    Toward a complexity theory for randomized search heuristics: black-box models

    Randomized search heuristics are a broadly used class of general-purpose algorithms. Analyzing them via classical methods of theoretical computer science is a growing field. While several strong runtime bounds exist, a powerful complexity theory for such algorithms is yet to be developed. We contribute to this goal in several aspects. In a first step, we analyze existing black-box complexity models. Our results indicate that these models are not restrictive enough. This remains true if we restrict the memory of the algorithms under consideration. These results motivate us to enrich the existing notions of black-box complexity by the additional restriction that not actual objective values, but only the relative quality of the previously evaluated solutions may be taken into account by the algorithms. Many heuristics belong to this class of algorithms. We show that our ranking-based model gives more realistic complexity estimates for some problems, while for others the low complexities of the previous models still hold. Surprisingly, our results have an interesting game-theoretic aspect as well. We show that analyzing the black-box complexity of the OneMax_n function class (a class often used to analyze how heuristics progress in easy parts of the search space) is the same as analyzing optimal winning strategies for the generalized Mastermind game with 2 colors and length-n codewords. This connection was seemingly overlooked so far in the search heuristics community.

    Randomized search heuristics are versatile algorithms that, owing to their great flexibility, are widely used, and not only in industrial contexts. Despite numerous successful applications, the runtime analysis of such heuristics is still in its infancy. In particular, we lack a good understanding of the situations in which problem-independent heuristics can deliver good solutions within a short runtime. A complexity theory similar to the one that exists in classical algorithmics would be desirable. With this thesis we contribute to the development of such a complexity theory for search heuristics. Using several examples, we show that existing models do not always capture the difficulty of a problem satisfactorily. We therefore propose a further model. In our Ranking-Based Black-Box Model, the algorithms learn no exact function values, but only the rank order of the search points queried so far. For some problems, this model gives a better assessment of the difficulty. However, we also show that even in the new model there exist problems whose complexity it rates as too low. Our results also have a game-theoretic aspect. Optimal winning strategies for the guesser in the Mastermind game (also known as SuperHirn) with n positions correspond exactly to optimal algorithms for maximizing OneMax_n functions. This connection had seemingly been overlooked so far. This thesis is written in English.
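
    The ranking-based restriction can be phrased operationally; here is a hypothetical Python sketch of an oracle that discloses only the relative quality of the queried search points, never their objective values:

        def ranking_oracle(f, queries):
            """Return the indices of the queried search points sorted from worst
            to best under the hidden objective f; the values f(x) themselves
            stay hidden from the algorithm."""
            return sorted(range(len(queries)), key=lambda i: f(queries[i]))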

    A New Randomized Algorithm to Approximate the Star Discrepancy Based on Threshold Accepting

    We present a new algorithm for estimating the star discrepancy of arbitrary point sets. Similar to the discrepancy-approximation algorithm of Winker and Fang [SIAM J. Numer. Anal. 34 (1997), 2028–2042], it is based on the optimization algorithm threshold accepting. Our improvements include, among others, a non-uniform sampling strategy, which is better suited for higher-dimensional inputs and additionally takes into account the topological characteristics of the given point sets, and rounding steps, which transform the axis-parallel boxes on which the discrepancy is to be tested into critical test boxes. These critical test boxes provably yield higher discrepancy values and contain the box that exhibits the maximum value of the local discrepancy. We provide comprehensive experiments to test the new algorithm. Our randomized algorithm frequently computes the exact discrepancy in all cases where this can be checked (i.e., where the exact discrepancy of the point set can be computed in feasible time). Most importantly, in higher dimensions the new method behaves clearly better than all previously known methods.
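
    For reference, a generic threshold-accepting loop in Python; this is a minimal sketch of the underlying heuristic under assumed interfaces, and it omits the paper's non-uniform sampling and rounding improvements:

        def threshold_accepting(objective, neighbor, start, thresholds, iters_per_level):
            """Maximize objective: unlike pure hill climbing, accept a random
            neighbor as long as it is worse by at most the current threshold;
            the thresholds shrink to zero over the run."""
            x, fx = start, objective(start)
            best, fbest = x, fx
            for t in thresholds:              # e.g. a decreasing sequence ending in 0
                for _ in range(iters_per_level):
                    y = neighbor(x)           # draw a random neighbor of x
                    fy = objective(y)
                    if fy >= fx - t:          # accept bounded deteriorations
                        x, fx = y, fy
                        if fy > fbest:
                            best, fbest = y, fy
            return best, fbest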

    Faster Black-Box Algorithms Through Higher Arity Operators

    We extend the work of Lehre and Witt (GECCO 2010) on the unbiased black-box model by considering higher arity variation operators. In particular, we show that already for binary operators the black-box complexity of LeadingOnes drops from $\Theta(n^2)$ for unary operators to $O(n \log n)$. For OneMax, the $\Omega(n \log n)$ unary black-box complexity drops to $O(n)$ in the binary case. For $k$-ary operators, $k \leq n$, the OneMax complexity further decreases to $O(n/\log k)$. Comment: To appear at FOGA 2011.
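
    For reference, LeadingOnes counts the length of the longest all-ones prefix, while OneMax simply counts one-bits, as sketched earlier; in the black-box model both are generalized to classes with a hidden target, but the plain versions convey the idea:

        def leading_ones(x):
            """LeadingOnes: the length of the longest all-ones prefix of x."""
            k = 0
            for bit in x:
                if bit != 1:
                    break
                k += 1
            return k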

    In Richtung einer Komplexitätstheorie für randomisierte Suchheuristiken: Black-Box-Modelle


    Memory-restricted black-box complexity of OneMax

    International audience.

    Finding optimal volume subintervals with k points and calculating the star discrepancy are NP-hard problems

    The well-known star discrepancy is a common measure for the uniformity of point distributions. It is used, e.g., in multivariate integration, pseudo-random number generation, experimental design, statistics, and computer graphics. We study here the complexity of calculating the star discrepancy of point sets in the $d$-dimensional unit cube and show that this is an NP-hard problem. To establish this complexity result, we first prove NP-hardness of the following related problems in computational geometry: given $n$ points in the $d$-dimensional unit cube, find a subinterval of minimum or maximum volume that contains $k$ of the $n$ points. Our results for the complexity of the subinterval problems settle a conjecture of E. Thiémard [E. Thiémard, Optimal volume subintervals with k points and star discrepancy via integer programming, Math. Meth. Oper. Res. 54 (2001) 21–45].
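
    For completeness, the star discrepancy of an $n$-point set $P \subseteq [0,1]^d$ is the supremum of the local discrepancy over all anchored axis-parallel boxes $[0,y)$:

        $$ D_n^*(P) = \sup_{y \in [0,1]^d} \left| \frac{|P \cap [0,y)|}{n} - \prod_{j=1}^{d} y_j \right| $$

    The term inside the absolute value, the difference between the fraction of points falling in the test box and the box's volume, is the local discrepancy of $[0,y)$; the NP-hardness result above concerns computing this supremum exactly.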